
    The Need for Robust, Consistent Methods in Societal Exergy Accounting

    © 2017 The Authors. Studies of societal exergy use share the common aim of tracing the flow of exergy through society, and are used to gain insights into the efficiency of energy use and its linkages to economic growth. However, their methodological approaches vary greatly, with significant impacts on results. We therefore review past studies to identify, synthesize and discuss methodological differences, and so contribute to a more consistent and robust approach to societal exergy accounting. Issues that should be taken into account when making methodological choices are discussed and key insights are presented: (1) For the mapping of primary inputs and useful exergy categories, including all natural resources is more consistent, but at the cost of not being able to distinguish the various energy end-uses in the production of materials. (2) To estimate primary electricity, none of the methods currently used can simultaneously capture the efficiency of the renewable energy sector, the environmental impact, and the efficiency of energy use in society. (3) To estimate final-to-useful exergy conversion efficiencies, standard thermodynamic definitions should be used, because proxies fail to distinguish between increases in exergy efficiency and increases in the efficiency of providing energy services.
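As an illustration of point (3), a final-to-useful efficiency based on a standard thermodynamic definition weights delivered heat by its exergy content (a Carnot factor) rather than using a proxy. This is a minimal sketch with hypothetical numbers, not a method from the reviewed studies:

```python
# Sketch of a thermodynamic final-to-useful exergy efficiency for a
# heating end-use. All quantities are illustrative.

def carnot_factor(t_use_k: float, t_ambient_k: float) -> float:
    """Exergy content of heat delivered at t_use_k, relative to ambient."""
    return 1.0 - t_ambient_k / t_use_k

def final_to_useful_exergy_efficiency(final_exergy_mj: float,
                                      useful_heat_mj: float,
                                      t_use_k: float,
                                      t_ambient_k: float) -> float:
    useful_exergy_mj = useful_heat_mj * carnot_factor(t_use_k, t_ambient_k)
    return useful_exergy_mj / final_exergy_mj

# Example: 100 MJ of fuel (final exergy) delivering 90 MJ of heat
# at 60 degC (333.15 K) against a 10 degC (283.15 K) ambient.
eff = final_to_useful_exergy_efficiency(100.0, 90.0, 333.15, 283.15)
```

A first-law proxy would report 90% efficiency here; the exergy efficiency is far lower, which is exactly the distinction the abstract argues proxies miss.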

    On the use of simulation as a Big Data semantic validator for supply chain management

    Simulation stands out as an appropriate method for the Supply Chain Management (SCM) field. Nevertheless, to produce accurate simulations of Supply Chains (SCs), several business processes must be considered. Thus, when using real data in these simulation models, Big Data concepts and technologies become necessary, as the involved data sources generate data at increasing volume, velocity and variety, in what is known as a Big Data context. While developing such a solution, several data issues were found, with simulation proving more efficient than traditional data profiling techniques in identifying them. This paper therefore proposes the use of simulation as a semantic validator of the data, proposes a classification for such issues, and quantifies their impact on the volume of data used in the final solution. It concludes that, while SC simulations using Big Data concepts and technologies are within the grasp of organizations, their data models still require considerable improvement in order to produce faithful mimics of their SCs. It was also found that simulation can help in identifying and bypassing some of these issues. This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the Project Scope UID/CEC/00319/2019 and by the doctoral scholarship PDE/BDE/114566/2016, funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH).
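The core idea — records that a simulation cannot meaningfully process are flagged as semantic data issues — can be sketched as follows. The field names, issue categories, and toy simulation are hypothetical, not the paper's actual model:

```python
# Toy sketch of simulation as a semantic validator: a trivial warehouse
# simulation consumes stock-movement records and flags any record it
# cannot process as a data issue, instead of silently ingesting it.

from dataclasses import dataclass, field

@dataclass
class Warehouse:
    stock: dict = field(default_factory=dict)  # product id -> quantity

def simulate_and_validate(movements, known_products):
    wh = Warehouse()
    issues = []
    for m in movements:
        if m["product"] not in known_products:
            issues.append((m, "unknown product reference"))
            continue
        if m["qty"] <= 0:
            issues.append((m, "non-positive quantity"))
            continue
        wh.stock[m["product"]] = wh.stock.get(m["product"], 0) + m["qty"]
    return wh, issues

movements = [
    {"product": "P1", "qty": 10},
    {"product": "P9", "qty": 5},   # not in master data
    {"product": "P1", "qty": -3},  # physically impossible movement
]
wh, issues = simulate_and_validate(movements, known_products={"P1", "P2"})
# wh.stock holds only the valid movement; the other two records are flagged
```

Unlike column-level data profiling, the simulation catches cross-record inconsistencies (here, a movement referencing a product absent from master data) because it actually tries to enact the data.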

    Predictive models for anti-tubercular molecules using machine learning on high-throughput biological screening datasets

    Background: Tuberculosis is a contagious disease caused by Mycobacterium tuberculosis (Mtb), affecting more than two billion people around the globe, and is one of the major causes of morbidity and mortality in the developing world. Recent reports suggest that Mtb has been developing resistance to the widely used anti-tubercular drugs, resulting in the emergence and spread of multi drug-resistant (MDR) and extensively drug-resistant (XDR) strains throughout the world. In view of this global epidemic, there is an urgent need for fast and efficient lead identification methodologies. Target-based screening of large compound libraries has been widely used as a fast and efficient approach for lead identification, but is restricted by knowledge of the target structure. Whole-organism screens, on the other hand, are target-agnostic and are now widely employed as an alternative for lead identification, but they are limited by the time and cost involved in running the screens for large compound libraries. This could possibly be circumvented by using computational approaches to prioritize molecules for screening programmes. Results: We utilized physicochemical properties of compounds to train four supervised classifiers (Naïve Bayes, Random Forest, J48 and SMO) on three publicly available bioassay screens of Mtb inhibitors and validated the robustness of the predictive models using various statistical measures. Conclusions: This study is a comprehensive analysis of high-throughput bioassay data for anti-tubercular activity and the application of machine learning approaches to create target-agnostic predictive models for anti-tubercular agents.
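The training-and-validation workflow described in the Results can be sketched with scikit-learn analogues of the Weka classifiers named in the abstract (J48 is a C4.5-style decision tree; SMO is Weka's SVM trainer). Synthetic data stands in for the real physicochemical descriptors and bioassay activity labels:

```python
# Hedged sketch: four supervised classifiers evaluated by cross-validated
# ROC AUC, mirroring the abstract's model-validation setup. The data is
# synthetic, not one of the three Mtb bioassay screens.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder for a descriptor matrix (rows = compounds, cols = properties)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree (J48-like)": DecisionTreeClassifier(random_state=0),
    "SVM (SMO-like)": SVC(),
}

# Mean cross-validated ROC AUC per classifier
results = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in models.items()
}
```

Real bioassay screens are heavily imbalanced (few actives), so in practice metrics such as ROC AUC or balanced accuracy matter more than raw accuracy — which is presumably part of what the abstract means by "various statistical measures".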

    Artificial Intelligence in Education

    Artificial Intelligence (AI) technologies have been researched in educational contexts for more than 30 years (Woolf 1988; Cumming and McDougall 2000; du Boulay 2016). More recently, commercial AI products have also entered the classroom. However, while many assume that Artificial Intelligence in Education (AIED) means students taught by robot teachers, the reality is more prosaic yet still has the potential to be transformative (Holmes et al. 2019). This chapter introduces AIED, an approach that has so far received little mainstream attention, both as a set of technologies and as a field of inquiry. It discusses AIED’s AI foundations, its use of models, its possible future, and the human context. It begins with some brief examples of AIED technologies

    Reasoning Under Uncertainty: Towards Collaborative Interactive Machine Learning

    In this paper, we present the current state of the art in decision making (DM) and machine learning (ML) and bridge the two research domains to create an integrated approach to complex problem solving based on human and computational agents. We present a novel classification of ML, emphasizing the human-in-the-loop in interactive ML (iML) and, more specifically, collaborative interactive ML (ciML), which we understand as a deeply integrated version of iML in which humans and algorithms work hand in hand to solve complex problems. Both humans and computers have specific strengths and weaknesses, and integrating humans into machine learning processes may be a very efficient way of tackling problems. This approach bears immense research potential for various domains, e.g., health informatics or industrial applications. We outline open questions and name future challenges that must be addressed by the research community to enable the use of collaborative interactive machine learning for problem solving at scale.
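The human-in-the-loop cycle that iML builds on can be illustrated with a toy uncertainty-sampling loop: the learner queries a human oracle on the instance it is least certain about, then retrains. The threshold "model" and oracle below are purely illustrative, not the ciML approach from the paper:

```python
# Toy human-in-the-loop loop: query the most uncertain point, ask the
# human, retrain. A 1-D threshold classifier stands in for the model.

def uncertainty(threshold, x):
    return -abs(x - threshold)  # closer to the boundary = more uncertain

def human_oracle(x):
    return 1 if x >= 0.6 else 0  # stands in for human judgment

data = [0.1, 0.4, 0.55, 0.62, 0.9]
threshold = 0.5
labeled = {}

for _ in range(3):  # three human-interaction rounds
    unlabeled = [x for x in data if x not in labeled]
    query = max(unlabeled, key=lambda x: uncertainty(threshold, x))
    labeled[query] = human_oracle(query)
    # naive "retraining": place the boundary between the labeled classes
    pos = [x for x, y in labeled.items() if y == 1]
    neg = [x for x, y in labeled.items() if y == 0]
    if pos and neg:
        threshold = (min(pos) + max(neg)) / 2
```

The point of the sketch is the division of labor: the algorithm decides what to ask, the human supplies judgment the algorithm lacks, and each answer immediately reshapes the model — the tight coupling the abstract calls working "hand in hand".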